Object-based Illumination Estimation with Rendering-aware Neural Networks
We present a scheme for fast environment light estimation from the RGBD
appearance of individual objects and their local image areas. Conventional
inverse rendering is too computationally demanding for real-time applications,
and the performance of purely learning-based techniques may be limited by the
meager input data available from individual objects. To address these issues,
we propose an approach that takes advantage of physical principles from inverse
rendering to constrain the solution, while also utilizing neural networks to
expedite the more computationally expensive portions of its processing, to
increase robustness to noisy input data as well as to improve temporal and
spatial stability. This results in a rendering-aware system that estimates the
local illumination distribution at an object with high accuracy and in real
time. With the estimated lighting, virtual objects can be rendered in AR
scenarios with shading that is consistent with the real scene, leading to
improved realism.

Comment: ECCV 202
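The "physical principles from inverse rendering" that constrain the solution can be illustrated with a classical toy version of the problem: under a Lambertian model, observed shading is linear in the lighting, so an unknown distant light can be recovered from known albedo and normals (e.g. from RGBD) by least squares. This is only a minimal sketch of the inverse-rendering constraint, not the paper's rendering-aware network; all names and values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical object pixels: albedo and surface normals assumed known
# (in practice obtained from the RGBD appearance of the object).
P = 500
normals = rng.normal(size=(P, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = 0.7

# Simulate observed shading under an unknown distant light l_true.
# Lambertian model: I = albedo * (n . l); back-facing pixels are dropped below.
l_true = np.array([0.3, 0.5, 0.9])
shading = albedo * (normals @ l_true)

# Keep front-lit pixels so the model stays linear in l.
mask = shading > 0
A = albedo * normals[mask]
b = shading[mask]

# Inverse-rendering constraint: recover the light by linear least squares.
l_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The paper's system handles the parts this sketch ignores (area lighting, noise, real-time stability) with neural networks, but the same idea applies: physics fixes the relationship between appearance and illumination, so learning only has to fill in what physics leaves under-determined.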
Deep Burst Denoising
Noise is an inherent issue of low-light image capture, one which is
exacerbated on mobile devices due to their narrow apertures and small sensors.
One strategy for mitigating noise in a low-light situation is to increase the
shutter time of the camera, thus allowing each photosite to integrate more
light and decrease noise variance. However, there are two downsides of long
exposures: (a) bright regions can exceed the sensor range, and (b) camera and
scene motion will result in blurred images. Another way of gathering more light
is to capture multiple short (thus noisy) frames in a "burst" and intelligently
integrate the content, thus avoiding the above downsides. In this paper, we use
the burst-capture strategy and implement the intelligent integration via a
recurrent fully convolutional deep neural network (CNN). We build our novel
multiframe architecture as a simple addition to any single-frame denoising
model, and design it to handle an arbitrary number of noisy input frames. We
show that it achieves state-of-the-art denoising results on our burst dataset,
improving on the best published multi-frame techniques, such as VBM4D and
FlexISP. Finally, we explore other applications of image enhancement by
integrating content from multiple frames and demonstrate that our DNN
architecture generalizes well to image super-resolution.
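The statistical motivation for burst capture can be sketched numerically: averaging N aligned noisy frames of a static scene shrinks the noise variance by roughly 1/N, which is the baseline the paper's recurrent network improves upon. The sketch below uses naive averaging of synthetic frames and hypothetical values; it is not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.2, 0.8, size=(64, 64))  # hypothetical ground-truth image
sigma = 0.1                                   # per-frame noise std (assumed)
N = 8                                         # frames in the burst

# Capture N short, noisy exposures of the same static scene.
burst = clean[None] + rng.normal(0.0, sigma, size=(N, 64, 64))

# Naive integration: the mean of N aligned frames has noise variance sigma^2/N.
merged = burst.mean(axis=0)

single_mse = np.mean((burst[0] - clean) ** 2)   # roughly sigma^2
merged_mse = np.mean((merged - clean) ** 2)     # roughly sigma^2 / N
```

A learned recurrent merge replaces the plain mean so that misaligned or moving content is integrated selectively rather than blurred, but the variance-reduction intuition above is what makes the burst strategy attractive in the first place.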